We present a strong object detector with encoder-decoder pretraining and finetuning. Our method, called Group DETR v2, is built upon a vision transformer encoder, ViT-Huge~\cite{dosovitskiy2020image}, a DETR variant, DINO~\cite{zhang2022dino}, and an efficient DETR training method, Group DETR~\cite{chen2022group}. The training process consists of self-supervised pretraining and finetuning a ViT-Huge encoder on ImageNet-1K, pretraining the detector on Object365, and finally finetuning it on COCO. Group DETR v2 achieves $\textbf{64.5}$ mAP on COCO test-dev and establishes a new SoTA on the COCO leaderboard (https://paperswithcode.com/sota/object-detection-on-coco).
This work addresses a central machine learning problem: performance degradation on out-of-distribution (OOD) test sets. The problem is particularly pronounced in medical-imaging-based diagnosis systems, which appear accurate but fail when tested in new hospitals/datasets. Recent studies indicate that such systems may learn shortcut features and non-relevant features rather than generalizable features, the so-called good features. We hypothesize that adversarial training can eliminate shortcut features, while saliency-guided training can filter out non-relevant features; both are nuisance features that cause performance degradation on OOD test sets. Accordingly, we propose a novel model training scheme for deep neural networks to learn good features for classification and/or detection tasks, ensuring generalizable performance on OOD test sets. Experimental results qualitatively and quantitatively demonstrate the superior performance of our method on classification tasks using benchmark CXR image datasets.
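The abstract does not specify the adversarial training method; as a minimal sketch of the general idea (adversarial perturbations discouraging reliance on easily flipped features), the following uses FGSM-style perturbations on a plain logistic regression. All names, hyperparameters, and the toy data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def fgsm_adversarial_training(X, y, epsilon=0.1, lr=0.5, epochs=200, seed=0):
    """Train logistic regression on FGSM-perturbed inputs (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # For logistic regression, the loss gradient w.r.t. the input x is
        # (p - y) * w; FGSM perturbs each sample by epsilon * sign of it.
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad_x = (p - y)[:, None] * w[None, :]
        X_adv = X + epsilon * np.sign(grad_x)
        # Standard gradient step, computed on the adversarial batch.
        p_adv = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
        err = p_adv - y
        w -= lr * (X_adv.T @ err) / len(y)
        b -= lr * err.mean()
    return w, b

# Toy data: the label depends only on the first feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)
w, b = fgsm_adversarial_training(X, y)
preds = ((X @ w + b) > 0).astype(float)
print((preds == y).mean())
```

The inner perturbation step is where shortcut-like directions get penalized: any feature the model leans on heavily is exactly the one FGSM attacks hardest.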
Assessing the blur of object images is critical for improving the performance of object recognition and retrieval. The main challenge lies in the lack of abundant images with reliable labels and effective learning strategies. Current datasets are labeled with limited and confusing quality levels. To overcome this limitation, we propose to label the rank relationship between pairs of images rather than their quality levels, since it is much easier for humans to label, and we build a large-scale realistic face image blur assessment dataset with reliable labels. Based on this dataset, we propose a method that obtains blur scores using only pairwise rank labels as supervision. Moreover, to further improve performance, we propose a self-supervised method based on quadruplet ranking consistency to exploit unlabeled data more effectively. The supervised and self-supervised methods constitute the final semi-supervised learning framework, which can be trained end-to-end. Experimental results demonstrate the effectiveness of our method.
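Learning a scalar score from pairwise rank labels alone can be sketched with a Bradley-Terry-style logistic loss on score differences. This is a generic illustration of the supervision signal described above, not the paper's model: the linear scorer, features, and hyperparameters are all assumptions.

```python
import numpy as np

def train_pairwise_scorer(feats, pairs, lr=0.1, epochs=500, seed=0):
    """Learn a linear score s(x) = w @ x from pairwise rank labels.

    `pairs` is a list of (i, j) meaning sample i should score higher than j.
    Uses a Bradley-Terry style logistic loss on the score difference.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=feats.shape[1])
    for _ in range(epochs):
        for i, j in pairs:
            diff = feats[i] @ w - feats[j] @ w
            p = 1.0 / (1.0 + np.exp(-diff))   # P(i ranked above j)
            # Gradient ascent on the pairwise log-likelihood.
            w += lr * (1.0 - p) * (feats[i] - feats[j])
    return w

# Toy example: feature 0 is the true blur level plus a little noise.
rng = np.random.default_rng(1)
blur = rng.uniform(size=20)
feats = np.stack([blur + 0.01 * rng.normal(size=20), rng.normal(size=20)], axis=1)
pairs = [(i, j) for i in range(20) for j in range(20) if blur[i] > blur[j] + 0.05]
w = train_pairwise_scorer(feats, pairs)
scores = feats @ w
# The ranking induced by the learned scores should agree with the labels.
agree = np.mean([scores[i] > scores[j] for i, j in pairs])
print(agree)
```

The same idea extends to quadruplets: consistency requires that the score ordering within one pair agrees with the ordering implied by the other, which is how the self-supervised branch can use unlabeled data.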
Non-negative matrix factorization (NMF) has been widely used for dimensionality reduction in machine learning. However, traditional NMF cannot properly handle outliers and is thus sensitive to noise. To improve the robustness of NMF, this paper proposes an adaptive weighted NMF, which introduces weights to emphasize the different importance of each data point and thereby reduces the algorithm's sensitivity to noisy data. It differs substantially from existing robust NMF methods that rely on slowly-growing similarity measures. Specifically, two strategies are proposed to achieve this goal: a fuzzy weighting technique and an entropy weighting technique, both of which lead to iterative solutions with simple forms. Experimental results show that the new methods yield more robust feature representations than existing methods on several real-world datasets contaminated by noise.
Federated optimization (FedOpt), which targets the collaborative training of a learning model across a large number of distributed clients, is important to federated learning. The primary concerns in FedOpt can be attributed to model divergence and communication efficiency, which significantly affect performance. In this paper, we propose a new method, LoSAC, to learn from heterogeneous distributed data more efficiently. Its key algorithmic insight is to locally update the estimate of the global full gradient after {each} regular local model update. Thus, LoSAC can keep clients' information refreshed in a more compact way. In particular, we study the convergence results of LoSAC. Moreover, a bonus of LoSAC is the ability to defend against information leakage from the recent technique Deep Leakage from Gradients (DLG). Finally, experiments have verified the superiority of LoSAC compared with state-of-the-art FedOpt algorithms. Specifically, LoSAC significantly improves communication efficiency by more than $100\%$ on average, mitigates the model divergence problem, and is equipped with a defense against DLG.
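The key idea, refreshing a stale estimate of the global gradient after every local step, can be sketched on a toy problem. This is a simplified illustration of the general mechanism, not LoSAC's exact update rule; the round structure, step sizes, and quadratic losses are all assumptions.

```python
import numpy as np

def losac_style_round(w, contribs, grad_fns, lr=0.1, local_steps=3):
    """One communication round: each client in turn performs local steps while
    refreshing the shared estimate of the global (average) gradient.

    `contribs[k]` is client k's last reported local gradient; the global
    estimate is their average. Simplified sketch, not the paper's algorithm.
    """
    K = len(grad_fns)
    for k in range(K):
        for _ in range(local_steps):
            g_new = grad_fns[k](w)
            # Replace client k's stale term in the averaged global estimate.
            contribs[k] = g_new
            g_global = sum(contribs) / K
            w = w - lr * g_global
    return w, contribs

# Two clients with quadratic losses (w - c)^2 / 2; the average loss is
# minimized at mean(c) = 2.0, while each client alone would pull w to its c.
centers = [1.0, 3.0]
grad_fns = [lambda w, c=c: w - c for c in centers]
w = 0.0
contribs = [g(w) for g in grad_fns]
for _ in range(30):
    w, contribs = losac_style_round(w, contribs, grad_fns)
print(w)
```

Because each local step descends along the refreshed global estimate rather than the purely local gradient, the iterates are pulled toward the global optimum (2.0 here) instead of diverging toward the individual clients' minima.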
Non-negative matrix factorization (NMF) has been widely used to learn low-dimensional representations of data. However, NMF pays the same attention to all attributes of a data point, which inevitably leads to inaccurate representations. For example, in a human face dataset, if an image contains a hat on the head, the hat should be removed, or the importance of its corresponding attributes should be reduced during matrix factorization. This paper proposes a novel type of NMF called entropy-weighted NMF (EWNMF), which uses an optimizable weight for each attribute of each data point to emphasize their importance. This is achieved by adding an entropy regularizer to the cost function and then solving the problem with the Lagrange multiplier method. Experimental results with several datasets demonstrate the feasibility and effectiveness of the proposed method. Our code is available at https://github.com/poisson-em/entropy-weighted-nmf.
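The entropy-regularized objective admits a closed-form weight update via Lagrange multipliers. The sketch below assumes the form $\min \sum_{ij} W_{ij}(X-UV)_{ij}^2 + \gamma \sum_{ij} W_{ij}\ln W_{ij}$ with per-column weight normalization; the paper's exact constraints and update rules may differ, and the multiplicative factor updates are standard weighted-NMF updates rather than the authors'.

```python
import numpy as np

def ewnmf(X, r, gamma=0.1, iters=200, seed=0):
    """Entropy-weighted NMF sketch: per-entry weights W with each column
    (data point) summing to 1, plus weighted multiplicative updates for U, V."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.uniform(0.1, 1.0, (m, r))
    V = rng.uniform(0.1, 1.0, (r, n))
    eps = 1e-9
    for _ in range(iters):
        # Closed-form weight update from the entropy regularizer:
        # W_ij proportional to exp(-residual_ij / gamma), normalized per column.
        E = (X - U @ V) ** 2
        W = np.exp(-E / gamma)
        W /= W.sum(axis=0, keepdims=True)
        # Weighted multiplicative updates keep U, V non-negative.
        U *= ((W * X) @ V.T) / (((W * (U @ V)) @ V.T) + eps)
        V *= (U.T @ (W * X)) / ((U.T @ (W * (U @ V))) + eps)
    return U, V, W

# Low-rank data with one corrupted entry; its weight should fall well below
# the uniform value 1/m, i.e. the outlier attribute is de-emphasized.
rng = np.random.default_rng(1)
X = rng.uniform(size=(20, 3)) @ rng.uniform(size=(3, 30))
X[0, 0] += 5.0
U, V, W = ewnmf(X, r=3)
print(W[0, 0] < 1.0 / 20)
```

The entropy term is what keeps the weights from collapsing onto a single best-fit entry: without it, the optimal $W$ is degenerate, while larger $\gamma$ pushes the weights back toward uniform.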
Dexterous manipulation remains an open problem in robotics. To coordinate the research community's efforts toward solving this problem, we propose a shared benchmark. We designed and built robotic platforms that are hosted at the MPI for Intelligent Systems and can be accessed remotely. Each platform consists of three robotic fingers capable of dexterous object manipulation. Users are able to control the platforms remotely by submitting code that is executed automatically, akin to a computational cluster. Using this setup, (i) we host robotics competitions, where teams from anywhere in the world access our platforms to tackle challenging tasks, (ii) we release the datasets collected during these competitions (including hundreds of robot-hours), and (iii) we give researchers access to these platforms for their own projects.
Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult to inductively infer by KGEs. To address this challenge, we resort to analogical inference and propose AnKGE, a novel and general self-supervised framework that enhances KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. In AnKGE, we train an analogy function for each level of analogical inference, taking as input the original element embedding from a well-trained KGE model and outputting the analogical object embedding. To combine the inductive inference capability of the original KGE model with the analogical inference capability enhanced by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights in the score function for prediction. Through extensive experiments on the FB15k-237 and WN18RR datasets, we show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
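The score interpolation can be sketched concretely. The snippet below assumes a DistMult-style base scorer (AnKGE is model-agnostic, so this is one possible choice) and a stand-in `analogy_fn` for the trained level-wise analogy functions; both names and the fixed weight are illustrative.

```python
import numpy as np

def distmult_score(h, r, t):
    """DistMult triple score: sum_i h_i * r_i * t_i."""
    return np.sum(h * r * t)

def ankge_style_score(h, r, t, analogy_fn, lam=0.7):
    """Interpolate the base KGE score with the analogical object's score.

    `analogy_fn` maps an embedding to its analogical-object embedding; it
    stands in for AnKGE's trained analogy functions, and `lam` stands in for
    the adaptive interpolation weight.
    """
    base = distmult_score(h, r, t)
    analogical = distmult_score(analogy_fn(h), r, t)
    return lam * base + (1.0 - lam) * analogical

# Sanity check: with an identity analogy function, the interpolated score
# reduces to the base model score.
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 8))
s = ankge_style_score(h, r, t, lambda e: e)
print(np.isclose(s, distmult_score(h, r, t)))
```

In the full framework the weight is adaptive rather than fixed, so triples where analogy is unreliable fall back toward the base model's inductive score.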
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video given a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary-bias: the annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as the new boundaries. 2) Reasoning-bias: such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper, we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. This mechanism is also able to supplement the absent consecutive visual semantics to the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
Normalizing flow is a class of deep generative models for efficient sampling and density estimation. In practice, the flow often appears as a chain of invertible neural network blocks; to facilitate training, existing works have regularized flow trajectories and designed special network architectures. The current paper develops a neural ODE flow network inspired by the Jordan-Kinderlehrer-Otto (JKO) scheme, which allows efficient block-wise training of the residual blocks and avoids inner loops of score matching or variational learning. As the JKO scheme unfolds the dynamics of the gradient flow, the proposed model naturally stacks residual network blocks one by one, reducing the memory load and the difficulty of performing end-to-end training of deep flow networks. We also develop adaptive time reparameterization of the flow network with a progressive refinement of the trajectory in probability space, which improves the model training efficiency and accuracy in practice. Using numerical experiments with synthetic and real data, we show that the proposed JKO-iFlow model achieves similar or better performance in generating new samples compared with existing flow and diffusion models at a significantly reduced computational and memory cost.
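The JKO scheme referenced above is a standard Wasserstein proximal discretization of the gradient flow; in the notation below, $\pi$ is the target density, $h$ the step size, and $W_2$ the 2-Wasserstein distance (this is the textbook statement of the scheme, matching the abstract's description, not a formula taken from the paper):

```latex
\rho_{k+1} \;=\; \arg\min_{\rho}\; \mathrm{KL}\!\left(\rho \,\|\, \pi\right)
\;+\; \frac{1}{2h}\, W_2^2\!\left(\rho, \rho_k\right)
```

As $h \to 0$, the sequence $\{\rho_k\}$ tracks the Wasserstein gradient flow of $\mathrm{KL}(\cdot \,\|\, \pi)$, which is why training one residual block per JKO step naturally unfolds the dynamics block by block.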